
    Evidence for the use of a Diamond Drill for Bead Making in Sri Lanka

    The use of a diamond splinter turned by a bow drill to drill quartz beads in present-day Cambay, India has been documented. A group of Cambay beads was made available for study. They were compared with a similar group of quartz beads excavated at Mantai, Sri Lanka, dated stratigraphically to c. 700-1000 A.D. Silicone impressions were made of the drill holes of selected beads from both Cambay and Mantai and examined by scanning electron microscopy. The pattern of drilling was the same, suggesting that the technique of drilling with a diamond splinter and bow drill is an ancient one. This has not been previously reported.

    Statistical Methods for Large Flight Lots and Ultra-high Reliability Applications

    We present statistical techniques for evaluating random and systematic errors for use in flight performance predictions for large flight lots and ultra-high reliability applications.

    From Traditional to Modern: Domain Adaptation for Action Classification in Short Social Video Clips

    Short internet video clips like vines present a significantly wilder distribution than traditional video datasets. In this paper, we focus on the problem of unsupervised action classification in wild vines using traditional labeled datasets. To this end, we use a simple domain adaptation strategy based on data augmentation. We utilise the semantic word2vec space as a common subspace in which to embed video features from both the labeled source domain and the unlabelled target domain. Our method incrementally augments the labeled source with target samples and iteratively modifies the embedding function to bring the source and target distributions together. Additionally, we utilise a multi-modal representation that incorporates the noisy semantic information available in the form of hashtags. We show the effectiveness of this simple adaptation technique on a test set of vines and achieve notable improvements in performance. Comment: 9 pages, GCPR, 201
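    The incremental source-augmentation loop described above can be sketched as plain self-training with a nearest-centroid classifier. This is a minimal illustration under assumed toy 2-D features and a made-up confidence margin, not the paper's word2vec embedding or video pipeline; `self_train`, its arguments, and the margin rule are all hypothetical.

```python
import math

def centroid(points):
    """Mean of a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def self_train(labeled, unlabeled, rounds=3, margin=0.5):
    """labeled: dict label -> list of feature tuples (the 'source');
    unlabeled: list of feature tuples (the 'target').
    Confidently classified target points are moved into the source,
    and the class centroids are re-estimated each round."""
    pool = list(unlabeled)
    for _ in range(rounds):
        cents = {lab: centroid(pts) for lab, pts in labeled.items()}
        keep = []
        for x in pool:
            ranked = sorted((math.dist(x, c), lab) for lab, c in cents.items())
            # confident only if the best centroid beats the runner-up by a margin
            if ranked[1][0] - ranked[0][0] > margin:
                labeled[ranked[0][1]].append(x)  # augment the labeled source
            else:
                keep.append(x)
        pool = keep
    return labeled, pool
```

    Ambiguous points (roughly equidistant from two centroids) stay in the pool rather than polluting the source, mirroring the incremental augmentation idea.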

    Statistical Model Selection for TID Hardness Assurance

    Radiation Hardness Assurance (RHA) methodologies against Total Ionizing Dose (TID) degradation impose rigorous statistical treatments on data from a part's Radiation Lot Acceptance Test (RLAT) and/or its historical performance. However, no similar methods exist for using "similarity" data, that is, data for similar parts fabricated in the same process as the part under qualification. This is despite the greater difficulty and potential risk in interpreting similarity data. In this work, we develop methods to disentangle part-to-part, lot-to-lot, and part-type-to-part-type variation. The methods we develop apply not just to qualification decisions, but also to quality control and to detection of process changes and other "out-of-family" behavior. We begin by discussing the data used in the study and the challenges of developing a statistic that provides a meaningful measure of degradation across multiple part types, each with its own performance specifications. We then develop analysis techniques and apply them to the different data sets.
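    Disentangling part-to-part from lot-to-lot variation is, in its simplest balanced form, a one-way variance-components (random-effects ANOVA) estimate. The sketch below assumes equal lot sizes and illustrates the general idea only, not the paper's method; `variance_components` and its data layout are hypothetical.

```python
def variance_components(lots):
    """lots: list of lists of measurements, one inner list per lot (equal sizes).
    Returns method-of-moments estimates of part-to-part and lot-to-lot variance."""
    n = len(lots[0])  # parts per lot
    k = len(lots)     # number of lots
    grand = sum(sum(l) for l in lots) / (n * k)
    lot_means = [sum(l) / n for l in lots]
    # one-way ANOVA mean squares: between lots and within lots
    ms_between = n * sum((m - grand) ** 2 for m in lot_means) / (k - 1)
    ms_within = sum((x - m) ** 2
                    for l, m in zip(lots, lot_means) for x in l) / (k * (n - 1))
    var_part = ms_within                              # part-to-part variance
    var_lot = max((ms_between - ms_within) / n, 0.0)  # lot-to-lot variance
    return var_part, var_lot
```

    The `max(..., 0.0)` guard reflects the standard convention of truncating a negative method-of-moments estimate of the lot variance at zero.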

    Blocking premature reverse transcription fails to rescue the HIV-1 nucleocapsid-mutant replication defect

    Background: The nucleocapsid (NC) protein of HIV-1 is critical for viral replication. Mutational analyses have demonstrated its involvement in viral assembly, genome packaging, budding, maturation, reverse transcription, and integration. We previously reported that two conservative NC mutations, His23Cys and His44Cys, cause premature reverse transcription such that mutant virions contain approximately 1,000-fold more DNA than wild-type virus, and are replication defective. In addition, both mutants show a specific defect in integration after infection.

    Results: In the present study we investigated whether blocking premature reverse transcription would relieve the infectivity defects, which we successfully performed by transfecting proviral plasmids into cells cultured in the presence of high levels of reverse transcriptase inhibitors. After subsequent removal of the inhibitors, the resulting viruses showed no significant difference in single-round infective titer compared to viruses in which premature reverse transcription did occur; there was no rescue of the infectivity defects in the NC mutants upon reverse transcriptase inhibitor treatment. Surprisingly, time-course endogenous reverse transcription assays demonstrated that the kinetics of both NC mutants were essentially identical to wild-type when premature reverse transcription was blocked. In contrast, after infection of CD4+ HeLa cells, we observed that while preventing premature reverse transcription in the NC mutants resulted in lower quantities of initial reverse transcripts, the kinetics of reverse transcription were not restored to those of untreated wild-type HIV-1.

    Conclusions: Premature reverse transcription is not the cause of the replication defect but is an independent side effect of the NC mutations.

    Multi-Way Multi-Group Segregation and Diversity Indices

    Background: How can we compute a segregation or diversity index from a three-way or multi-way contingency table, where each variable can take on an arbitrary finite number of values and where the index takes values between zero and one? Previous methods exist only for two-way contingency tables or dichotomous variables. A prototypical three-way case is the segregation index of a set of industries or departments given multiple explanatory variables, such as both sex and race. This can be further extended to other variables, such as disability, number of years of education, and former military service.

    Methodology/Principal Findings: We extend existing segregation indices based on Euclidean distance (the square of the coefficient of variation) and the Boltzmann/Shannon/Theil index from two-way to multi-way contingency tables by including multiple summations. We provide several biological applications, such as indices for age polyethism and linkage disequilibrium. We also provide a new heuristic conceptualization of entropy-based indices. Higher-order association measures are often independent of lower-order ones, hence an overall segregation or diversity index should be the arithmetic mean of the normalized association measures at all orders. These methods are applicable when individuals self-identify as multiple races or even multiple sexes, and when individuals work part-time in multiple industries.

    Conclusions/Significance: The policy implications of this work are enormous, allowing people to rigorously test whether
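    As a minimal illustration of the entropy-based (Theil) index mentioned above, joint categories such as (sex, race) tuples can simply be treated as groups in the usual unit-by-group computation; summing over the joint cells is what the multiple summations accomplish. The function and the department/group names below are hypothetical, and this sketch omits the higher-order decomposition and averaging step the abstract describes.

```python
import math

def entropy(props):
    """Shannon entropy of a list of proportions."""
    return -sum(p * math.log(p) for p in props if p > 0)

def theil_index(table):
    """table: dict unit -> dict group -> count; groups may be joint
    (sex, race) tuples, giving the multi-way extension by summation."""
    group_totals = {}
    for counts in table.values():
        for g, c in counts.items():
            group_totals[g] = group_totals.get(g, 0) + c
    T = sum(group_totals.values())
    E = entropy([c / T for c in group_totals.values()])  # overall diversity
    h = 0.0
    for counts in table.values():
        t = sum(counts.values())
        e = entropy([c / t for c in counts.values() if c])
        h += (t / T) * (E - e) / E  # weighted entropy shortfall per unit
    return h  # 0 = no segregation, 1 = complete segregation
```

    The index is dimensionless and normalized: identical group mixes in every unit give 0, while units containing a single joint category each give 1.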

    Unbiased Shape Compactness for Segmentation

    We propose to constrain segmentation functionals with a dimensionless, unbiased and position-independent shape compactness prior, which we solve efficiently with an alternating direction method of multipliers (ADMM). Involving a squared sum of pairwise potentials, our prior results in a challenging high-order optimization problem over dense (fully connected) graphs. We split the problem into a sequence of easier sub-problems, each performed efficiently at every iteration: (i) a sparse-matrix inversion based on the Woodbury identity, (ii) a closed-form solution of a cubic equation, and (iii) a graph-cut update of a sub-modular pairwise sub-problem with a sparse graph. We deploy our prior in an energy minimization, in conjunction with a supervised classifier term based on CNNs and standard regularization constraints. We demonstrate the usefulness of our energy in several medical applications. In particular, we report comprehensive evaluations of our fully automated algorithm over 40 subjects, showing competitive performance on the challenging task of abdominal aorta segmentation in MRI. Comment: Accepted at MICCAI 201
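    Sub-problem (i), the matrix inversion based on the Woodbury identity, can be illustrated in its simplest rank-1 form, the Sherman-Morrison special case: (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u). The toy 2x2 helpers below are a self-contained illustration of that identity only, not the paper's solver.

```python
def mat_vec(M, v):
    """Matrix-vector product for a list-of-rows matrix."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def inv2(M):
    """Direct inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def sherman_morrison(A_inv, u, v):
    """Inverse of (A + u v^T) computed from A^{-1} alone,
    the rank-1 special case of the Woodbury identity."""
    Au = mat_vec(A_inv, u)                   # A^{-1} u
    vA = mat_vec(list(zip(*A_inv)), v)       # v^T A^{-1} via the transpose
    denom = 1.0 + sum(v[i] * Au[i] for i in range(len(v)))
    return [[A_inv[i][j] - Au[i] * vA[j] / denom for j in range(len(v))]
            for i in range(len(u))]
```

    The payoff is that updating an already-inverted matrix by a low-rank term costs only matrix-vector products, which is what makes the per-iteration inversion cheap.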

    Detecting and Classifying Nuclei on a Budget

    The benefits of deep neural networks can be hard to realise in medical imaging tasks because training sample sizes are often modest. Pre-training on large data sets and subsequent transfer learning to specific tasks with limited labelled training data has proved a successful strategy in other domains. Here, we implement and test this idea for detecting and classifying nuclei in histology, important tasks that enable quantifiable characterisation of prostate cancer. We pre-train a convolutional neural network for nucleus detection on a large colon histology dataset, and examine the effects of fine-tuning this network with different amounts of prostate histology data. Results show promise for clinical translation. However, we find that transfer learning is not always a viable option when training deep neural networks for nucleus classification. As such, we also demonstrate that semi-supervised ladder networks are a suitable alternative for learning a nucleus classifier with limited data.